-
Researchers have proposed numerous software reliability growth models, many of which possess complex parametric forms. In practice, a software reliability growth model should strike a balance between predictive accuracy and other statistical measures of goodness of fit, yet past studies have not always performed such a balanced assessment. This paper proposes a framework for software reliability growth models possessing a bathtub-shaped fault detection rate and derives stable and efficient expectation conditional maximization (ECM) algorithms to enable the fitting of these models. The stages of the bathtub are interpreted in the context of the software testing process. The illustrations compare multiple bathtub-shaped and reduced model forms, including classical models, with respect to predictive and information-theoretic measures. The results indicate that software reliability growth models possessing a bathtub-shaped fault detection rate outperformed classical models on both types of measures. The proposed framework and models may therefore offer a practical compromise between model complexity and predictive accuracy.
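To make the modeling idea concrete, the sketch below evaluates the mean value function of a nonhomogeneous Poisson process whose fault detection rate combines a decreasing burn-in term, a constant term, and an increasing wear-out term to produce a bathtub shape. The specific parametric form, parameter values, and function names are illustrative assumptions rather than the models proposed in the paper, and the ECM fitting step is omitted.

```python
import numpy as np

def bathtub_rate(t, b0, b1, k, b2):
    """Illustrative bathtub-shaped fault detection rate:
    decreasing burn-in term + constant term + increasing wear-out term."""
    return b0 + b1 * np.exp(-k * t) + b2 * t

def cumulative_rate(t, b0, b1, k, b2):
    """Closed-form integral of the detection rate from 0 to t."""
    return b0 * t + (b1 / k) * (1.0 - np.exp(-k * t)) + 0.5 * b2 * t**2

def mean_value_function(t, N, b0, b1, k, b2):
    """Expected cumulative faults detected by time t under an NHPP
    with the bathtub-shaped detection rate above."""
    return N * (1.0 - np.exp(-cumulative_rate(t, b0, b1, k, b2)))

# Expected faults detected over a 100-hour test interval (assumed parameters).
t = np.linspace(0, 100, 5)
print(mean_value_function(t, N=150, b0=0.005, b1=0.05, k=0.3, b2=1e-4))
```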
-
With the increased interest in incorporating machine learning into software and systems, methods to characterize the impact of machine learning reliability are needed to ensure the reliability of the software and systems in which these algorithms reside. Towards this end, we build upon the architecture-based approach to software reliability modeling, which represents application reliability in terms of the component reliabilities and the probabilistic transitions between the components. Traditional architecture-based software reliability models consider all components to be deterministic software. We therefore extend this modeling approach to the case where some components are learning-enabled components. Here, the reliability of a machine learning component is interpreted as the accuracy of its decisions, a common measure for classification algorithms. Moreover, we allow these machine learning components to be fault-tolerant in the sense that multiple diverse classifiers are trained to guide decisions and the majority decision is taken. We demonstrate the utility of the approach to assess the impact of machine learning on software reliability and illustrate the concept of reliability growth in machine learning. Finally, we validate past analytical results for a fault-tolerant system composed of correlated components with real machine learning algorithms and data, demonstrating the analytical expression's ability to accurately estimate the reliability of the fault-tolerant machine learning component and, subsequently, of the architecture-based software within which it resides.
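A minimal sketch of the two ideas combined here appears below: the accuracy of a majority vote over independent classifiers stands in for the reliability of the learning-enabled component, which is then composed with conventional components in a Cheung-style architecture-based model. The independence assumption, component count, reliabilities, and transition probabilities are illustrative; the paper additionally treats correlated classifiers.

```python
import numpy as np
from math import comb

def majority_vote_reliability(p, n=3):
    """Accuracy of a majority vote over n classifiers of accuracy p,
    assuming independent errors (a simplifying assumption)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

def architecture_reliability(R, P):
    """Cheung-style architecture-based reliability: R[i] is component i's
    reliability, P[i][j] the probability control transfers from i to j,
    component 0 is the entry point and the last component is terminal."""
    R = np.asarray(R, dtype=float)
    P = np.asarray(P, dtype=float)
    Q = R[:, None] * P                       # survive component i, then move to j
    S = np.linalg.inv(np.eye(len(R)) - Q)    # expected visits before failure
    return S[0, -1] * R[-1]                  # reach the last component and survive it

# A three-component application whose middle component is a fault-tolerant
# machine learning classifier modeled by its majority-vote accuracy.
r_ml = majority_vote_reliability(p=0.95, n=3)
R = [0.999, r_ml, 0.998]
P = [[0.0, 1.0, 0.0],
     [0.2, 0.0, 0.8],    # the ML component may hand control back upstream
     [0.0, 0.0, 0.0]]
print(architecture_reliability(R, P))
```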
-
Traditional software reliability growth models consider only defect discovery data, yet the practical concern of software engineers is the removal of these defects. Most attempts to model the relationship between defect discovery and resolution have been restricted to differential equation-based models of these two activities. However, defect tracking databases offer a practical source of information on the defect lifecycle suitable for more complete reliability and performance models. This paper explicitly connects software reliability growth models to software defect tracking. Data from a NASA project have been employed to develop differential equation-based models of defect discovery and resolution as well as distributional and Markovian models of defect resolution. The states of the Markov model represent thirteen unique stages of the NASA software defect lifecycle. Both state transition probabilities and transition time distributions are computed from the defect database. Illustrations compare the predictive and computational performance of the alternative approaches. The results suggest that the simple distributional approach achieves the best tradeoff between these two performance measures, but that enhanced data collection practices could improve the utility of the more advanced approaches and the inferences they enable.
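As a rough illustration of how such a Markov model can be built from a defect tracking database, the sketch below estimates empirical state transition probabilities and mean transition times from time-ordered defect histories. The lifecycle stage names, timestamps, and the helper estimate_transition_model are hypothetical placeholders, not the NASA project's thirteen-stage lifecycle or its data.

```python
from collections import defaultdict

def estimate_transition_model(histories):
    """Estimate empirical state transition probabilities and mean transition
    times from defect histories, where each history is a time-ordered list of
    (state, timestamp) pairs taken from a defect tracking database."""
    counts = defaultdict(lambda: defaultdict(int))
    times = defaultdict(lambda: defaultdict(list))
    for history in histories:
        for (s1, t1), (s2, t2) in zip(history, history[1:]):
            counts[s1][s2] += 1
            times[s1][s2].append(t2 - t1)
    probs = {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
             for s, nxt in counts.items()}
    mean_times = {s: {t: sum(d) / len(d) for t, d in nxt.items()}
                  for s, nxt in times.items()}
    return probs, mean_times

# Hypothetical defect histories with placeholder lifecycle stages and
# timestamps in days since the defect was opened.
histories = [
    [("Open", 0), ("Assigned", 2), ("Resolved", 9), ("Closed", 10)],
    [("Open", 0), ("Assigned", 1), ("Resolved", 4), ("Reopened", 6),
     ("Resolved", 8), ("Closed", 9)],
]
probs, mean_times = estimate_transition_model(histories)
print(probs["Resolved"], mean_times["Assigned"])
```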